
    Perceptual Asymmetry and Sound Change: An Articulatory, Acoustic/Perceptual, and Computational Analysis

    Previous experimental studies of the identification of stop and fricative consonants have shown that some consonant pairs are asymmetrically confused for one another, with listeners’ percepts tending to favor one member of the pair in a conditioning context. Researchers have also suggested that this phenomenon may play a conditioning role in sound change, although the mechanism by which perceptual asymmetry facilitates language change is somewhat unclear. This dissertation uses articulatory, acoustic, and perceptual data to provide insight into why perceptual asymmetry is observed among certain consonants and in specific contexts. It also uses computational modeling to generate initial predictions about the contexts in which perceptual asymmetry could contribute to stability or change in phonetic categories. Six experiments were conducted, each addressing asymmetry in the consonant pairs /k/-/t/ (before /i/), /k/-/p/ (before /i u/), /p/-/t/ (before /i/), and /θ/-/f/ (possibly unconditioned). In the articulatory experiment, vocal tract spatial parameters were extracted from real-time MRI video of speakers producing VCV disyllables in order to address the role of vocal tract shape in the target consonants’ vowel-dependent spectral similarity. The results suggest that, for consonant pairs involving /k/, CV coarticulation creates, as expected, vocal tract shapes that are most similar to one another in the environment conditioning perceptual asymmetry. However, CV coarticulation was less informative for explaining the vocalic conditioning of the /p/-/t/ asymmetry. In the second experiment, random forest (RF) models were trained on acoustic samples of the target consonants from a speech corpus. Their output, which was used to identify frequency components important to the discrimination of consonant pairs, aligned well with these consonants’ spectral characteristics as predicted by acoustic models.
A follow-up perception experiment that examined the categorization strategies of participants listening to band-filtered CV syllables generally showed listener sensitivity to these same components, although listeners were also sensitive to band-filtering outside the predicted frequency bands. Perceptual asymmetry is observed both in CV contexts and for isolated Cs. In the fourth experiment, a Bayesian analysis was performed to help explain why perceptual asymmetry appears when listening to isolated Cs, and a follow-up perception experiment helped to evaluate the relevance of this analysis to human perception. For /k/-/t/, for example, whose confusions favor /t/, this analysis suggested that [t] and [k] both have the highest likelihood of being generated by /t/ (relative to the likelihood of /k/ generating each) in the context conditioning asymmetry. The follow-up study suggests that listeners are more likely to categorize a [t] or [k] token as /t/ if it has a higher likelihood of being generated by /t/ (relative to /k/). The final experiment used agent-based modeling to simulate the intergenerational transmission of phonetic categories. Its results suggest that perceptual asymmetry can affect the acquisition of categories under certain conditions. A lack of reliable access to non-phonetic information about the speaker’s intended category, or a tendency not to store tokens with low discriminability, can both contribute to the instability of phonetic categories over time, but primarily in the contexts conditioning asymmetry. This dissertation makes several contributions to research on perceptual asymmetry. The articulatory experiment suggests that confusability can be mirrored by gestural ambiguity. The Bayesian analysis could also be used to build and test predictions about the confusability of other sounds by context. Finally, the model simulations offer predictions of the conditions under which perceptual asymmetry could condition sound change.
    PhD, Linguistics, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/155085/1/iccallow_1.pd
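The likelihood-ratio logic described in the abstract's Bayesian analysis can be sketched in a few lines. This is a minimal illustration, not the dissertation's actual model: it assumes Gaussian category likelihoods over a single hypothetical spectral cue, and all means, standard deviations, and cue values below are invented. Giving /t/ the broader distribution shows how ambiguous tokens midway between the category means can still be labeled /t/, mirroring the asymmetry the analysis describes.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Likelihood of cue value x under a Gaussian category model."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical 1-D spectral cue (arbitrary units). Parameters are invented
# for illustration: /t/ is modeled with greater variance than /k/.
categories = {"/t/": (4000.0, 1200.0), "/k/": (2500.0, 500.0)}

def categorize(x):
    """Label a token by whichever category assigns it the higher likelihood."""
    lik = {c: gaussian_pdf(x, m, s) for c, (m, s) in categories.items()}
    ratio = lik["/t/"] / lik["/k/"]
    return ("/t/" if ratio > 1.0 else "/k/"), ratio

# Tokens near /k/'s mean go to /k/; the midpoint token already leans /t/.
for cue in (2500.0, 3250.0, 4000.0):
    label, ratio = categorize(cue)
    print(f"cue={cue:.0f}: p(x|/t/)/p(x|/k/)={ratio:.2f} -> {label}")
```

Under these toy settings the token halfway between the two means (cue 3250) is categorized as /t/, because the wider /t/ distribution assigns it slightly more likelihood than the narrower /k/ distribution does.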

    Linguistic and Social Expectation Beyond The Gender Binary

    Previous sociophonetic research has demonstrated that social information affects listeners’ linguistic decision-making. In particular, Strand and Johnson (1996) show that imputed (binary) gender shifts listeners’ sibilant category boundaries: listeners categorized ambiguous synthesized sibilants as /s/ more often when listening to a male speaker. Further research has shown that sibilant identity influences how listeners categorize speakers’ (binary) gender, demonstrating a bidirectional relationship between social and linguistic information (Bouavichith et al., 2019). Although listeners use categorical social information when it is explicitly given in a task, it is not understood how listeners perform when the same acoustic information is socially contextualized in two distinct ways. This study examines how sibilant category boundaries are shifted by acoustically masculinized speech: this manipulation is explained in one condition as the speaker’s gender transition and in the other as a digital manipulation. Participants complete a binary forced-choice linguistic decision task. Participants listen to ambiguous sibilant-initial lexical items (e.g., shack-sack) and indicate on a button-box which words they hear. Each token is created by splicing together one synthesized sibilant (from a /ʃ/-/s/ continuum) with a naturalistic production of the rime. Each rime, produced by a cis woman, is acoustically manipulated across two blocks to vary the perceived gender of the speaker. In Block 1, the rime of each stimulus is minimally manipulated. All listeners are told that the speaker identified as female at the time of the recording. In Block 2, the rimes are masculinized. Listeners are told either that (Condition 1) the same speaker no longer identified as female at the time of a second recording, or that (Condition 2) the acoustic stimuli have been digitally manipulated.
In line with previous results, we hypothesize that listeners will shift their category boundaries between Blocks 1 and 2 toward more male-like sibilant boundaries. Further, we predict a greater degree of shift in this direction for listeners assigned to Condition 1. This prediction assumes that listeners are sensitive to how speakers use phonetic variation to convey social meaning. This experiment serves as an effort to engage sociophonetic perception research with richer treatments of gender.
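The predicted boundary shift can be quantified in a standard way: fit a logistic curve to each block's proportion of /s/ responses across the continuum and compare the 50% crossover points. The sketch below is our illustration of that conventional analysis, not the study's stated method, and all response data are invented toy values (1 = heard /s/) in which Block 2 answers skew toward /s/, as the hypothesis predicts for masculinized rimes.

```python
import math

def fit_logistic(steps, responses, lr=0.5, iters=8000):
    """Minimal 1-D logistic regression by gradient ascent:
    p(/s/) = sigmoid(a + b*(step - mean)); returns the 50% crossover step."""
    mu = sum(steps) / len(steps)
    xs = [x - mu for x in steps]  # center the predictor for stable updates
    a = b = 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, responses):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p
            gb += (y - p) * x
        a += lr * ga / n
        b += lr * gb / n
    return mu - a / b  # continuum step where p(/s/) crosses 0.5

# Invented responses on a 7-step /ʃ/-/s/ continuum, four repetitions per block.
steps = [1, 2, 3, 4, 5, 6, 7] * 4
block1 = [0, 0, 0, 0, 1, 1, 1] * 2 + [0, 0, 0, 1, 1, 1, 1] + [0, 0, 0, 0, 0, 1, 1]
block2 = [0, 0, 1, 1, 1, 1, 1] * 2 + [0, 0, 0, 1, 1, 1, 1] + [0, 1, 1, 1, 1, 1, 1]

b1 = fit_logistic(steps, block1)
b2 = fit_logistic(steps, block2)
print(f"Block 1 boundary ~ step {b1:.2f}; Block 2 boundary ~ step {b2:.2f}")
```

A smaller crossover step in Block 2 means more of the continuum is heard as /s/, i.e., the category boundary has moved toward the /ʃ/ end; comparing the size of that shift across the two framing conditions would test the second prediction.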